Summary:
Interpretability is widely recognized as essential in machine learning, yet optimization models remain largely opaque, limiting their adoption in high-stakes decision-making. While optimization provides mathematically rigorous solutions, the reasoning behind these solutions is often difficult to extract and communicate. This lack of transparency is particularly problematic in fields such as energy planning, healthcare, and resource allocation, where decision-makers require not only optimal solutions but also a clear understanding of trade-offs, constraints, and alternative options. To address these challenges, we propose a framework for interpretable optimization built on three key pillars. First, simplification and surrogate modeling reduce problem complexity while preserving decision-relevant structures, allowing stakeholders to engage with more intuitive representations of optimization models. Second, near-optimal solution analysis identifies alternative solutions that perform comparably to the optimal one, offering flexibility and robustness in decision-making while uncovering hidden trade-offs. Third, rationale generation ensures that solutions are explainable and actionable by providing insights into the relationships among variables, constraints, and objectives. By integrating these principles, optimization can move beyond black-box decision-making toward greater transparency, accountability, and usability. Enhancing interpretability strengthens both efficiency and ethical responsibility, enabling decision-makers to trust, validate, and implement optimization-driven insights with confidence.
Keywords: interpretable optimization; optimization; explainability; global sensitivity analysis; fitness landscape; near-optimal solutions; surrogate modeling; problem simplification; presolve; sensitivity analysis; modeling all alternatives; modeling to generate alternatives; ethics; rationale generation
JCR Impact Factor and WoS quartile: 2.500 - Q1 (2023)
DOI reference:
https://doi.org/10.3390/app15105732
Published in print: May 2025.
Published online: May 2025.
Citation:
S. Lumbreras, P. Ciller, Interpretable Optimization: Why and How We Should Explain Optimization Models. Applied Sciences. Vol. 15, no. 10, pp. 5732-1 - 5732-28, May 2025. [Online: May 2025]